We propose a new method to estimate causal effects from nonexperimental data. Each pair of sample units is first associated with a stochastic 'treatment' (the difference in factors between the two units) and an effect (the resulting difference in outcomes). All such pairs can then be combined to provide more accurate estimates of causal effects in observational data, given a statistical model connecting combinatorial properties of treatments to the accuracy and unbiasedness of their effects. The article introduces one such model and a Bayesian approach to combine the $O(n^2)$ pairwise observations typically available in nonexperimental data. This also leads to an interpretation of nonexperimental datasets as incomplete, or noisy, versions of ideal factorial experimental designs. This approach to causal effect estimation has several advantages: (1) it expands the number of observations, converting thousands of individuals into millions of observational treatments; (2) starting with the treatments closest to the experimental ideal, it identifies noncausal variables that can be ignored subsequently, making estimation easier in each iteration while departing minimally from experiment-like conditions; (3) it recovers individual causal effects in heterogeneous populations. We evaluate the method in simulations and on the National Supported Work (NSW) program, an intensively studied program whose effects are known from randomized field experiments. We demonstrate that the proposed approach recovers causal effects in common NSW samples, as well as in arbitrary subpopulations and in an order-of-magnitude larger supersample covering the entire national program data, outperforming statistical, econometric, and machine-learning estimators in all cases.
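As a minimal illustration of the pairwise construction described in this abstract (not the authors' implementation; the data, column layout, and concentration threshold below are assumptions), the following sketch turns n individual records into O(n^2) pairwise 'treatment' and 'effect' observations:

```python
# Minimal sketch of the pairwise construction: each pair of units yields a
# "treatment" (difference in covariates) and an "effect" (difference in
# outcomes). The data and the 0.8 threshold are illustrative assumptions.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))                                 # covariates (factors)
y = X @ rng.normal(size=p) + rng.normal(scale=0.5, size=n)  # outcomes

pairs = list(combinations(range(n), 2))                     # O(n^2) unit pairs
treatments = np.array([X[i] - X[j] for i, j in pairs])      # factor differences
effects = np.array([y[i] - y[j] for i, j in pairs])         # outcome differences

# Pairs whose factor differences are concentrated on a single variable
# resemble a one-factor-at-a-time experiment; a simple proxy is the share
# of the total absolute difference carried by the largest component.
concentration = np.abs(treatments).max(axis=1) / np.abs(treatments).sum(axis=1)
near_experimental = concentration > 0.8

print(f"{len(pairs)} pairwise observations from {n} units, "
      f"{near_experimental.sum()} close to the one-factor ideal")
```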
Credit scoring models are the primary instrument used by financial institutions to manage credit risk. Research on behavioral scoring is scarce because data access is difficult: the obligation of financial institutions to maintain the privacy and security of borrowers' information keeps them from collaborating in research initiatives. In this work, we present a methodology for evaluating the performance of models trained with synthetic data when they are applied to real-world data. Our results show that synthetic data quality degrades as the number of attributes increases. Nevertheless, creditworthiness assessment models trained with synthetic data show a reduction of only 3\% in AUC and 6\% in KS compared with models trained with real data. These results are significant because they encourage credit risk research based on synthetic data, making it possible to preserve borrowers' privacy and to address problems that until now have been hampered by the lack of accessible information.
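A hedged sketch of the evaluation protocol implied above: train one classifier on real data and one on synthetic data, then score both on the same real test set with AUC and KS. The data, the model class, and the synthetic generator are placeholders, not the paper's pipeline.

```python
# Compare AUC and KS of models trained on real vs. synthetic data,
# both evaluated on a held-out real test set. Data here is random noise
# standing in for real and synthetic behavioral-scoring tables.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from scipy.stats import ks_2samp

def ks_statistic(y_true, scores):
    """KS: max distance between the score CDFs of good and bad borrowers."""
    return ks_2samp(scores[y_true == 1], scores[y_true == 0]).statistic

X_real, y_real = np.random.rand(5000, 20), np.random.randint(0, 2, 5000)
X_syn, y_syn = np.random.rand(5000, 20), np.random.randint(0, 2, 5000)

X_tr, X_te, y_tr, y_te = train_test_split(
    X_real, y_real, test_size=0.3, stratify=y_real, random_state=0)

for name, (Xf, yf) in {"real": (X_tr, y_tr), "synthetic": (X_syn, y_syn)}.items():
    model = GradientBoostingClassifier().fit(Xf, yf)
    scores = model.predict_proba(X_te)[:, 1]
    print(f"trained on {name}: AUC={roc_auc_score(y_te, scores):.3f} "
          f"KS={ks_statistic(y_te, scores):.3f}")
```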
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Despite the impact of psychiatric disorders on clinical health, early-stage diagnosis remains a challenge. Machine learning studies have shown that classifiers tend to be overly narrow in the diagnosis prediction task. The overlap between conditions leads to high heterogeneity among participants that is not adequately captured by classification models. To address this issue, normative approaches have surged as an alternative method. By using a generative model to learn the distribution of healthy brain data patterns, we can identify the presence of pathologies as deviations or outliers from the distribution learned by the model. In particular, deep generative models showed great results as normative models to identify neurological lesions in the brain. However, unlike most neurological lesions, psychiatric disorders present subtle changes widespread in several brain regions, making these alterations challenging to identify. In this work, we evaluate the performance of transformer-based normative models to detect subtle brain changes expressed in adolescents and young adults. We trained our model on 3D MRI scans of neurotypical individuals (N=1,765). Then, we obtained the likelihood of neurotypical controls and psychiatric patients with early-stage schizophrenia from an independent dataset (N=93) from the Human Connectome Project. Using the predicted likelihood of the scans as a proxy for a normative score, we obtained an AUROC of 0.82 when assessing the difference between controls and individuals with early-stage schizophrenia. Our approach surpassed recent normative methods based on brain age and Gaussian Process, showing the promising use of deep generative models to help in individualised analyses.
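The final step of the pipeline described above, turning a per-scan likelihood under the normative model into a group-separation score, can be illustrated as follows. The likelihood values here are simulated stand-ins, not model outputs, and the group sizes are arbitrary.

```python
# AUROC from per-scan likelihoods of controls vs. patients under a
# normative generative model (simulated values, for illustration only).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Lower likelihood under the normative model = larger deviation from the
# neurotypical distribution learned during training.
loglik_controls = rng.normal(loc=0.0, scale=1.0, size=60)
loglik_patients = rng.normal(loc=-1.2, scale=1.0, size=33)

deviation = -np.concatenate([loglik_controls, loglik_patients])  # normative score
labels = np.concatenate([np.zeros(60), np.ones(33)])             # 1 = patient

print(f"AUROC = {roc_auc_score(labels, deviation):.2f}")
```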
Weakly-supervised object detection (WSOD) models attempt to leverage image-level annotations in lieu of accurate but costly-to-obtain object localization labels. This oftentimes leads to substandard object detection and localization at inference time. To tackle this issue, we propose D2DF2WOD, a Dual-Domain Fully-to-Weakly Supervised Object Detection framework that leverages synthetic data, annotated with precise object localization, to supplement a natural image target domain, where only image-level labels are available. In its warm-up domain adaptation stage, the model learns a fully-supervised object detector (FSOD) to improve the precision of the object proposals in the target domain, and at the same time learns target-domain-specific and detection-aware proposal features. In its main WSOD stage, a WSOD model is specifically tuned to the target domain. The feature extractor and the object proposal generator of the WSOD model are built upon the fine-tuned FSOD model. We test D2DF2WOD on five dual-domain image benchmarks. The results show that our method results in consistently improved object detection and localization compared with state-of-the-art methods.
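The two-stage idea described above can be summarized with the toy skeleton below. The modules, losses, and tensor sizes are stand-ins chosen for brevity; D2DF2WOD's actual detectors, proposal generators, and adaptation losses are not reproduced here. What the sketch shows is only the hand-off: the weakly supervised stage reuses the feature extractor fine-tuned in the fully supervised warm-up stage.

```python
# Toy two-stage skeleton: FSOD trained on synthetic data with boxes,
# then a WSOD model that inherits the FSOD backbone and trains with
# image-level labels only. All components are simplified placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Backbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(8))
    def forward(self, x):
        return self.conv(x)

class FSOD(nn.Module):
    """Fully supervised detector trained on synthetic images with precise boxes."""
    def __init__(self):
        super().__init__()
        self.backbone = Backbone()
        self.box_head = nn.Linear(16 * 8 * 8, 4)      # toy box regressor
    def forward(self, x):
        return self.box_head(self.backbone(x).flatten(1))

class WSOD(nn.Module):
    """Weakly supervised detector whose features come from the fine-tuned FSOD."""
    def __init__(self, fsod: FSOD, num_classes: int):
        super().__init__()
        self.backbone = fsod.backbone                  # reuse adapted features
        self.cls_head = nn.Linear(16 * 8 * 8, num_classes)
    def forward(self, x):
        return self.cls_head(self.backbone(x).flatten(1))  # image-level scores

# Stage 1 (warm-up): supervised box regression on synthetic source data
# (the domain-adaptation terms of the real method are omitted).
fsod = FSOD()
synthetic_images, synthetic_boxes = torch.randn(4, 3, 64, 64), torch.rand(4, 4)
loss_stage1 = F.l1_loss(fsod(synthetic_images), synthetic_boxes)

# Stage 2 (main WSOD): tune on target images using image-level labels only.
wsod = WSOD(fsod, num_classes=20)
target_images = torch.randn(4, 3, 64, 64)
image_labels = torch.randint(0, 2, (4, 20)).float()
loss_stage2 = F.binary_cross_entropy_with_logits(wsod(target_images), image_labels)
print(loss_stage1.item(), loss_stage2.item())
```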
The renewed interest from the scientific community in machine learning (ML) is opening many new areas of research. Here we focus on how novel trends in ML are providing opportunities to improve the field of computational fluid dynamics (CFD). In particular, we discuss synergies between ML and CFD that have already shown benefits, and we also assess areas that are under development and may produce important benefits in the coming years. We believe that it is also important to emphasize a balanced perspective of cautious optimism for these emerging approaches.
This paper presents a proof-of-concept method for classifying chemical compounds directly from NMR data without performing structure elucidation. This can reduce the time needed to find good structure candidates, since in most cases the matching must be done by a human expert, or at the very least the matching process must be meaningfully interpreted by one; automation in the area of NMR has therefore long been actively sought. A convolutional neural network (CNN) was identified as suitable for the classification; other methods, including clustering and image registration, were found unsuitable for the task in a comparative analysis. The result shows that deep learning can offer solutions to automation problems in cheminformatics.
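As a hedged illustration of the kind of classifier this refers to (the layer sizes, spectrum length, and number of classes are assumptions, not those of the paper), a minimal 1D CNN that maps an NMR spectrum, treated as a 1D signal, to a compound class could look like this:

```python
# Minimal 1D CNN sketch for spectrum classification; sizes are illustrative.
import torch
import torch.nn as nn

class SpectrumCNN(nn.Module):
    def __init__(self, n_points=4096, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (n_points // 16), n_classes)

    def forward(self, spectra):                 # spectra: (batch, 1, n_points)
        return self.classifier(self.features(spectra).flatten(1))

model = SpectrumCNN()
dummy_batch = torch.randn(8, 1, 4096)           # 8 simulated spectra
logits = model(dummy_batch)
print(logits.shape)                             # torch.Size([8, 10])
```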
In recommender systems, items are exposed to many different users, and we would like to learn how familiar a new user is with an existing item. This can be framed as an anomaly detection (AD) problem that distinguishes 'common users' (nominal) from 'new users' (anomalous). Given the sheer number of items and the sparsity of user-item paired data, independently applying conventional single-task detection methods to each item quickly becomes impractical, while correlations between items are ignored. To address this multi-task anomaly detection problem, we propose collaborative anomaly detection (CAD) to jointly learn all tasks, with correlations between tasks encoded through task embeddings. We explore CAD with conditional density estimation and conditional likelihood ratio estimation. We find that: (i) learning to estimate the likelihood ratio is more effective and more efficient than density estimation; (ii) it is beneficial to select a small number of tasks in advance to learn a task embedding model, and then use it to initialize the embeddings of all tasks. These embeddings can thus capture correlations between tasks and generalize to new, correlated tasks.
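A hedged sketch of the likelihood-ratio variant mentioned above: a single shared network is conditioned on a learned per-item (task) embedding and trained to separate observed nominal interactions from reference (noise) samples, so that its logit approximates the log density ratio. The dimensions, the noise distribution, and the training loop are assumptions for illustration.

```python
# Conditional likelihood-ratio estimation with a shared task embedding
# (toy data; not the paper's CAD implementation).
import torch
import torch.nn as nn

n_items, d_user, d_task = 100, 16, 8

class ConditionalRatioEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.task_emb = nn.Embedding(n_items, d_task)   # correlation-capturing embedding
        self.net = nn.Sequential(nn.Linear(d_user + d_task, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, x, item_ids):
        z = torch.cat([x, self.task_emb(item_ids)], dim=-1)
        return self.net(z).squeeze(-1)                  # logit ~ log p_nominal / p_ref

model = ConditionalRatioEstimator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for _ in range(100):                                    # toy training loop
    item_ids = torch.randint(0, n_items, (256,))
    nominal = torch.randn(256, d_user)                  # observed interactions
    noise = torch.randn(256, d_user) * 3                # reference samples
    x = torch.cat([nominal, noise])
    y = torch.cat([torch.ones(256), torch.zeros(256)])
    loss = bce(model(x, torch.cat([item_ids, item_ids])), y)
    opt.zero_grad(); loss.backward(); opt.step()

# Low estimated log-ratio flags anomalous (new) users for a given item.
log_ratio = model(torch.randn(5, d_user), torch.randint(0, n_items, (5,)))
```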
Detecting accounting anomalies is a recurring challenge in financial statement audits. Recently, novel methods derived from deep learning (DL) have been proposed to audit the large volumes of accounting records underlying a statement. However, owing to their vast number of parameters, such models suffer from the drawback of being inherently opaque. At the same time, the concealed inner workings of a model often hinder its real-world application. This observation holds particularly true in financial audits, since auditors must reasonably explain and justify their audit decisions. Various explainable AI (XAI) techniques have been proposed to meet this challenge, such as SHapley Additive exPlanations (SHAP). However, in the unsupervised DL settings frequently applied in financial audits, these methods explain the model output at the level of encoded variables. As a result, human auditors often find the explanations of autoencoder neural networks (AENNs) hard to understand. To mitigate this drawback, we propose RESHAPE, which explains the model output at an aggregated attribute level. In addition, we introduce an evaluation framework to compare the versatility of XAI methods in auditing. Our experimental results provide empirical evidence that RESHAPE yields more versatile explanations than state-of-the-art baselines. We view such attribute-level explanations as a necessary next step in the adoption of unsupervised DL techniques in financial auditing.
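The aggregation idea motivating attribute-level explanations can be illustrated with the toy sketch below: attributions assigned by an XAI method to individual one-hot encoded variables are summed back to the original accounting attributes, yielding one score per attribute. This mirrors only the motivation described in the abstract, not the RESHAPE algorithm itself; the attribute names and attribution values are hypothetical.

```python
# Sum variable-level attributions back to the original accounting attributes.
import numpy as np
import pandas as pd

# Hypothetical journal-entry attributes and their one-hot encoded columns
encoded_columns = ["account=4000", "account=4711", "cost_centre=A",
                   "cost_centre=B", "doc_type=KR", "doc_type=SA"]
attribute_of = [c.split("=")[0] for c in encoded_columns]

# Variable-level attributions for one journal entry (e.g. SHAP values or
# per-variable reconstruction errors of the autoencoder) -- simulated here.
variable_attributions = np.array([0.02, 0.61, 0.05, 0.03, 0.44, 0.01])

attribute_level = (pd.Series(variable_attributions, index=attribute_of)
                     .groupby(level=0).sum()
                     .sort_values(ascending=False))
print(attribute_level)   # one interpretable score per accounting attribute
```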
Unmanned aerial vehicles (UAVs) have been standing out because of the wide range of applications in which they can operate autonomously. However, they need intelligent systems capable of providing a deeper understanding of what they perceive in order to perform multiple tasks. This becomes more challenging in complex environments, where it is necessary to perceive the surroundings and act under environmental uncertainty to make decisions. In this context, a system using active perception can improve performance by seeking the best next view to identify the target while displacement occurs. This work aims to contribute to active perception for UAVs by addressing the problem of tracking and recognizing water-surface structures to perform a dynamic landing. We show that a system combining classical image processing techniques with a simple deep reinforcement learning (Deep-RL) agent is able to perceive the environment and handle uncertainty, without resorting to complex convolutional neural networks (CNNs) or contrastive learning (CL).
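A conceptual sketch of the pipeline just described, with all specifics assumed rather than taken from the paper: classical image processing reduces each frame to a few target cues (e.g. the offset of the detected platform), and a small Deep-RL agent maps those cues to discrete displacement actions. The feature extractor, action set, and network are illustrative stand-ins.

```python
# Classical detection -> small Q-network -> discrete UAV action (toy example).
import numpy as np
import torch
import torch.nn as nn

ACTIONS = ["forward", "backward", "left", "right", "descend"]

def detect_platform(frame: np.ndarray) -> np.ndarray:
    """Stand-in for classical detection (thresholding/contours): returns the
    normalised (dx, dy, intensity) of the brightest blob as the target cue."""
    y, x = np.unravel_index(frame.argmax(), frame.shape)
    h, w = frame.shape
    return np.array([x / w - 0.5, y / h - 0.5, frame.max()], dtype=np.float32)

q_net = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, len(ACTIONS)))

frame = np.random.rand(120, 160).astype(np.float32)      # simulated camera frame
state = torch.from_numpy(detect_platform(frame))
action = ACTIONS[int(q_net(state).argmax())]             # greedy action selection
print("chosen action:", action)
```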